India’s Synthetic Media Rules Build Enforcement on the Wrong Foundation
On 20 February 2026, India’s Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 come into force. The rules, notified by the Ministry of Electronics and Information Technology (MeitY) on 10 February, introduce India’s first regulations for synthetic media (referred to as “synthetically generated information” or SGI in the rules). They mandate labelling, provenance metadata, and automated verification by platforms, and they drastically shorten the time platforms have to remove flagged content.
In November 2025, WITNESS submitted comments to MeitY on the draft rules after consulting with local civil society. We drew on nearly a decade of global research and advocacy on synthetic media, content provenance, and human rights. We made five specific recommendations. Some were partially adopted: the rules now apply only to audio and visual content rather than all AI outputs; routine AI-assisted tasks such as color correction, noise reduction, transcription, and formatting are explicitly excluded; and an impractical requirement that a visible label cover 10% of the content has been removed. These are genuine improvements, and we welcome the government’s responsiveness to civil society input. However, the final rules contain critical gaps that were not addressed, and they introduce new provisions that were not part of the public consultation.
“India has the right to regulate synthetic media, and we recognize genuine progress in the final rules,” said Sam Gregory, the Executive Director of WITNESS. “But provenance that doesn’t travel across platforms isn’t provenance, and detection tools that can’t reliably tell synthetic from authentic content shouldn’t be the basis for legal enforcement. These rules need open standards, shared responsibility across the AI pipeline, and safeguards that protect the people provenance is supposed to serve.”
These concerns are compounded by drastically compressed enforcement timelines, the absence of due-process safeguards, and new provisions not present in the October 2025 draft, including pre-publication filtering obligations and a mechanism for disclosing user identities to private complainants.
Provenance needs to be interoperable, secure, and rights-respecting
WITNESS has long advocated for provenance as one essential signal within the broader transparency ecosystem, and for governments to ground their regulations in open, interoperable, and secure standards rather than developing isolated national frameworks. The most developed open standard for content provenance today is C2PA’s Content Credentials, the closest existing standard to meeting all three requirements: interoperable by design, secured through cryptographic signing, and compatible with rights-respecting implementation through data minimization. This approach is already reflected in the EU AI Act’s Code of Practice on Transparency and in California’s AI Transparency Act (SB 942, as amended by AB 853), which requires provenance data to comply with widely adopted, industry-accepted specifications from an established standards-setting body. That formulation ensures interoperability without mandating a specific technology, an outcome WITNESS advocated for and secured during the legislative process.
The rules require that SGI be embedded with “permanent metadata or other appropriate technical provenance mechanisms, to the extent technically feasible, including a unique identifier” (Rule 3(3)(a)(ii)). But they make no reference to widely accepted open standards such as C2PA, JPEG Trust, or the emerging ISO 22144. The “technically feasible” qualifier is undefined. Each intermediary is left to implement its own proprietary metadata scheme with no interoperability requirement. For content that crosses jurisdictions, provenance signals embedded in India may be unreadable elsewhere. The rules also specify no verification mechanism for provenance metadata: there is no requirement for cryptographic signing or any other means of confirming that embedded data has not been altered. The anti-tampering provision in Rule 3(3)(b) prohibits removal of metadata, but without a secure verification layer there is no way to detect whether it has been modified.
More fundamentally, the rules’ provenance model answers the wrong question. The unique identifier is designed to “identify the computer resource of the intermediary,” not to document how the content was created or modified. In our work on the EU Code of Practice, WITNESS has advocated for a “recipe” approach to provenance: understanding the ingredients of AI and human contribution in content and how they were added. C2PA-style provenance tracks the chain of custody of the content itself, recording each edit, tool, and transformation. India’s approach traces content back to the platform, not through the process. A journalist using AI-assisted translation, or a human rights organization using generative tools to subtitle documentation footage, would see their content permanently tied to a platform’s identifier without any record of what the AI actually did. This is platform traceability, not content transparency, and it reinforces our core criticism that these rules remain too platform-centric.
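To make the distinction concrete, the sketch below contrasts a bare platform identifier with a “recipe”-style provenance record that is cryptographically signed so tampering can be detected. It is an illustrative simplification under assumptions of our own, not an actual C2PA manifest structure and not anything the rules prescribe; the field names, tools, and keys are hypothetical, and the example uses Python with the cryptography package.

```python
# Illustrative sketch only: not a real C2PA manifest and not an implementation of the
# Indian rules. Field names, tools, and keys are hypothetical.
# Requires the "cryptography" package (pip install cryptography).
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# What Rule 3(3)(a)(ii) asks for: an identifier pointing back to the platform.
# Shown only for contrast; it says nothing about how the content was made.
platform_identifier = {"unique_id": "intermediary-resource-0001"}

# A "recipe"-style record instead documents what was done to the content.
recipe_manifest = {
    "ingredients": [
        {"action": "captured", "tool": "phone-camera"},
        {"action": "ai_subtitled", "tool": "speech-to-text-model", "scope": "subtitles only"},
    ]
}
payload = json.dumps(recipe_manifest, sort_keys=True).encode()

# Cryptographic signing at the point of creation is what makes tampering detectable.
signing_key = Ed25519PrivateKey.generate()
signature = signing_key.sign(payload)
verify_key = signing_key.public_key()

# Later, anyone holding the public key can check whether the metadata was altered.
altered = json.dumps({"ingredients": [{"action": "captured"}]}, sort_keys=True).encode()
for label, data in [("original", payload), ("altered", altered)]:
    try:
        verify_key.verify(signature, data)
        print(f"{label}: metadata intact")
    except InvalidSignature:
        print(f"{label}: metadata was modified after signing")
```

Rule 3(3)(b)’s prohibition on removing metadata provides none of this: without a signature or an equivalent verification layer, a modified record is indistinguishable from an authentic one.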
The rules are also silent on privacy. There is no data minimization requirement, no protection against linking identifiers back to individual users, and no consent mechanism. In our submissions to the EU’s Code of Practice on Transparency, WITNESS has argued that personally identifiable information should not be embedded in provenance data by default and that control over such data must rest with rights-holders. India’s rules contain none of these protections. Nothing prevents a government from compelling intermediaries to map identifiers back to specific user accounts, and the anti-tampering provision means that if provenance metadata enables identification, there is no lawful basis to correct or redact it.
Building enforcement on detection that does not work
The second serious concern is the rules’ reliance on automated AI detection as a legal compliance mechanism. Rule 4(1A)(b) requires significant social media intermediaries (SSMIs) to “deploy appropriate technical measures, including automated tools or other suitable mechanisms, to verify the accuracy” of user declarations on whether content is SGI.
WITNESS’ TRIED benchmark demonstrates that current AI detection tools produce inconsistent results across file types, modalities, and contexts, with significant false-positive and false-negative rates. Our Deepfakes Rapid Response Force has documented these limitations in real-world verification scenarios. Detection tools are improving, but they are nowhere near the reliability threshold needed to serve as the basis for legal compliance obligations. Building enforcement on this foundation means that authentic human rights documentation could be incorrectly flagged as synthetic, while sophisticated deepfakes may evade detection. The rules do not address what happens when these tools get it wrong. There is no due process for correcting mistakes, notifying users, or allowing them to challenge a decision.
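A rough back-of-the-envelope calculation illustrates why error rates matter at platform scale. The figures below are assumptions chosen purely for illustration, not measured rates from TRIED or any specific detector.

```python
# Illustrative arithmetic only: the upload volume and error rates are assumptions,
# not measurements from WITNESS' TRIED benchmark or any particular detection tool.
daily_uploads = 10_000_000       # assumed uploads a large platform screens per day
synthetic_share = 0.01           # assume 1% of uploads are actually synthetic
false_positive_rate = 0.01       # assume 1% of authentic content is wrongly flagged
true_positive_rate = 0.90        # assume 90% of synthetic content is caught

authentic = daily_uploads * (1 - synthetic_share)
synthetic = daily_uploads * synthetic_share

wrongly_flagged = authentic * false_positive_rate   # authentic items labelled synthetic
caught = synthetic * true_positive_rate
missed = synthetic - caught                          # fakes that evade detection

precision = caught / (caught + wrongly_flagged)
print(f"{wrongly_flagged:,.0f} authentic items wrongly flagged per day")
print(f"{missed:,.0f} synthetic items missed per day")
print(f"only {precision:.0%} of flagged items are actually synthetic")
```

Under these assumptions, a detector that looks accurate on paper still mislabels tens of thousands of authentic items a day, and nearly half of everything it flags is not synthetic; treating its output as proof of a false declaration, without notification or appeal, builds legal consequences on that error margin.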
The stakes of getting this wrong are high. If a platform is found to have knowingly allowed or failed to act on synthetic content, it is deemed to have failed its due diligence obligations, risking the loss of safe harbor protection under Section 79 of the IT Act. Indian civil society organizations have raised serious concerns about the compatibility of these proactive monitoring obligations with existing safe harbor protections. The problem is compounded by the fact that platforms are expected to meet these obligations using detection tools whose reliability has not been established.
The rules place the burden of identification solely on downstream platforms. The triggering language in Rule 3(3), “an intermediary [that] offers a computer resource which may enable, permit, or facilitate” the creation or dissemination of SGI, is broad enough to potentially encompass AI developers and model providers. But because the obligation is framed within the IT Act’s intermediary architecture, in practice it falls on the platforms hosting content, not on the developers and model providers who build the underlying systems. Neither developers nor model providers face a corresponding obligation to embed provenance signals at the point of creation. WITNESS’ submission called for developers, deployers, and intermediaries to share responsibility across the AI pipeline. The final rules did not adopt this approach.
The result is that India’s rules mandate detection as the primary enforcement mechanism while building a provenance system that cannot support it. Detection is left to do the work that provenance infrastructure should be making easier, and it falls on platforms that lack the upstream signals needed to do it well.
Compressed timelines without safeguards
The rules also drastically compress enforcement timelines. Takedown orders must now be acted on within three hours (Rule 3(1)(d)), or two hours for intimate imagery (Rule 3(2)(b)). For some of the most harmful content, such as child sexual abuse material (CSAM) and non-consensual intimate imagery (NCII), swift action is warranted: for an identifiable victim, the harm is compounded with each view. But takedown-centered frameworks alone won’t solve this. Once content has spread, containing it becomes far harder, which is why AI-generated NCII needs to be tackled at the creation stage. That means regulation targeting how NCII of real individuals is created, not just the platform distribution these IT rules focus on.
The European Union is actively testing this principle through its enforcement action against X/Grok for enabling the mass generation of non-consensual intimate deepfakes, and Brazil’s pending AI framework (PL 2338/2023) proposes a risk-based approach that would extend to systems capable of generating synthetic NCII. We would welcome India expanding its approach in the same direction. In the absence of such upstream safeguards, compressed timelines for CSAM and NCII remain an essential backstop but should not be treated as a standalone solution.
The problem is that the rules apply the same timeline indiscriminately. Because SGI is now included under all unlawful-act provisions (Rule 2(1A)), content involving any AI assistance can be targeted through the same three-hour window, whether the complaint concerns child exploitation or defamation. The incentive is to remove first and review later. In time-sensitive contexts such as elections, protests, or breaking news, this creates a concrete pathway for the liar’s dividend: legitimate documentation removed on the basis that it might be synthetic, with no mechanism for correction, user notification, or appeal.
The Santa Clara Principles on Transparency and Accountability in Content Moderation, endorsed by major platforms and developed by a global coalition of human rights organizations, set clear standards for these situations: user notification before or promptly after enforcement action, a meaningful right to appeal, and transparency reporting. The Principles recognize that where content is time-sensitive, appeal processes should be expedited, not eliminated. India’s rules meet none of these standards for SGI-flagged content. WITNESS has consistently called for these safeguards and more: for broader categories beyond CSAM and NCII, removal should be preceded by user notification and a contestation window, and supported by public reporting on removal volumes and error rates. We recommended these in our November 2025 submission. None were adopted.
New provisions raise additional concerns
The final rules also introduced provisions that were not present in the October 2025 draft and that raise serious concerns for expression and privacy.
Pre-publication filtering without upstream obligations
Rule 3(3)(a)(i) requires any intermediary that enables or facilitates synthetic content to deploy automated measures to prevent SGI that violates “any law for the time being in force.” As noted above, this ambiguous language may cover AI developers and model providers, but the intermediary framing means the obligation lands on platforms, not on the developers who build the underlying systems. In practice, platforms must pre-screen content before publication not just for the specific harms listed in the rule, such as CSAM or false documents, but for any violation of any Indian law. Users whose content is blocked receive no notification and have no way to challenge the decision.
If developers and model providers embedded interoperable provenance signals at the point of creation, platforms would have a more reliable basis for identifying synthetic content than automated detection alone. Enforcement could then focus on genuinely harmful content rather than requiring platforms to screen all AI-assisted output against the full breadth of the law. Pre-publication filtering should be narrowed to content categories where detection infrastructure is established and the harm is irreversible per view, such as known CSAM. For non-consensual intimate imagery, the harm equally demands an urgent response, but the technical and normative preconditions for reliable pre-filtering are not yet met: automated systems cannot reliably define “intimate” content or determine consent, the sensitive databases these systems require raise serious privacy and security concerns, and well-documented biases in content classifiers risk disproportionately impacting marginalized communities. These questions must be resolved, and norms established, before pre-filtering of NCII is implemented. South Korea’s experience demonstrates that effective response to AI-generated NCII is achievable through dedicated enforcement infrastructure and targeted criminal provisions, without requiring pre-publication filtering.
Where pre-publication filtering is applied to any category, it should include user notification and a meaningful contestation process.
Disclosure of user identity without due process
Rule 3(1)(ca)(ii)(III) allows platforms to disclose the identity of a user who has violated SGI provisions to a complainant who claims to be a victim or to be acting on behalf of a victim. While this is qualified by “in accordance with applicable law,” there is no requirement to notify the user beforehand, no process for contesting the identification, and no clear threshold for validating the complainant’s claim. Where serious legal violations are involved, identity disclosure should be directed to law enforcement or ordered by a court, not handed to a private complainant. As drafted, the provision creates a pathway that could be exploited to unmask journalists, human rights defenders, or civil society actors who rely on anonymity for their safety. Even where disclosure is court-ordered, the risks are acute in jurisdictions where law enforcement itself has been used to target journalists and activists. This reinforces why provenance systems should not embed personally identifiable information by default, a principle set out in the provenance analysis above.
These concerns are particularly acute in India, where enforcement of the Digital Personal Data Protection Act, 2023 remains nascent and where personal data collected through national identity infrastructure has been used in ways that raise serious privacy concerns. Any implementation of identity disclosure provisions must incorporate robust privacy safeguards to avoid compounding these existing vulnerabilities. Like the filtering provisions, this requirement was not present in the October 2025 draft and was not subject to public consultation.
Four urgent changes
These rules come into force on 20 February 2026. India’s ambition to regulate synthetic media is significant, and we welcome the improvements made since the October 2025 draft. But as enacted, the framework relies on enforcement mechanisms that are technically unfit for purpose, while missing the opportunity to build on effective standards infrastructure. Four changes would make these rules more effective.
Standards-based provenance built for rights. India should ground its provenance requirements in open, interoperable, secure, and privacy-preserving standards rather than leaving each platform to develop proprietary metadata schemes, following California’s approach of requiring an industry-accepted standard without mandating a specific technology. Provenance data should not embed personally identifiable information by default, and must include data minimization safeguards.
Pipeline responsibility. Responsibility for transparency and provenance must be shared across the AI pipeline. Developers and model providers should be required to embed provenance signals at the point of creation, so that platforms are not left relying solely on detection tools to identify synthetic content after the fact. WITNESS’ own research, including our TRIED benchmark and Deepfakes Rapid Response Force, demonstrates that detection tools are not yet reliable enough to serve as a primary enforcement mechanism. They have a role to play, but only as a complement to provenance infrastructure, not a substitute for it.
Filtering scope. Pre-publication filtering should be narrowed to content categories where detection infrastructure is established, such as known CSAM, not extended to any violation of any law in force. For NCII, norms around consent determination, scope constraints, database security, and bias auditing must be established before pre-filtering is implemented.
Due process. Where content is removed or blocked, users must receive notification and have access to a meaningful contestation process. Identity disclosure provisions must require judicial authorization, not direct disclosure to private complainants, and users must be notified before their identity is shared. Compressed timelines for broader categories of content should be accompanied by the safeguards set out in the Santa Clara Principles: notification, appeal, and transparency reporting.
WITNESS will continue to engage with Indian civil society and policymakers on these issues, as we do in our ongoing work on the EU AI Act Code of Practice, with international standards bodies, and on content provenance and broader AI transparency initiatives. The communities most affected by these rules, including journalists, human rights defenders, and civil society actors who rely on digital tools to document and share their work, deserve regulation that protects rather than exposes them. India’s regulatory choices also carry particular weight for jurisdictions across the global majority that look to its IT framework as a model. We remain committed to working toward synthetic media regulation that is technically sound, globally interoperable, and effective at addressing genuine harms.